52 research outputs found

    Magnetic control of the pair creation in spatially localized supercritical fields

    Get PDF
    We examine the impact of a perpendicular magnetic field on the creation mechanism of electron-positron pairs in a supercritical static electric field, where both fields are localized along the direction of the electric field. When the spatial extent of the magnetic field exceeds that of the electric field, quantum field theoretical simulations based on the Dirac equation predict a suppression of pair creation even if the electric field is supercritical. Furthermore, an arbitrarily small magnetic field outside the interaction zone can even bring the creation process to a complete halt if it is sufficiently extended. The mechanism behind this magnetically induced shutoff can be associated with a reopening of the mass gap and the emergence of electrically dressed Landau levels.
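
    As a point of reference for the gap and Landau-level language used above, the following are the standard textbook expressions (not the electrically dressed levels computed in the paper): the Schwinger critical field that marks the supercritical regime, and the relativistic Landau spectrum of the Dirac equation in a uniform magnetic field B along z, whose gap at n = 0, p_z = 0 remains 2 m_e c^2.

        \[
          E_{\mathrm{cr}} = \frac{m_e^{2} c^{3}}{e \hbar} \approx 1.3 \times 10^{18}\ \mathrm{V/m},
          \qquad
          E_{n}(p_z) = \pm \sqrt{m_e^{2} c^{4} + c^{2} p_z^{2} + 2 n \hbar |e| B c^{2}}, \quad n = 0, 1, 2, \ldots
        \]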

    Conversational question answering: a survey

    Get PDF
    Published online: 6 September 2022
    Question answering (QA) systems provide a way of querying information available in various formats, including, but not limited to, unstructured and structured data, in natural language. QA constitutes a considerable part of conversational artificial intelligence (AI), which has led to the introduction of a dedicated research topic, conversational question answering (CQA), wherein a system is required to understand the given context and then engage in multi-turn QA to satisfy a user's information needs. While most existing research has focused on single-turn QA, multi-turn QA has recently gained attention and prominence owing to the availability of large-scale multi-turn QA datasets and the development of pre-trained language models. With a growing number of models and research papers added to the literature every year, there is a pressing need to organize and present the related work in a unified manner to streamline future research. This survey presents a comprehensive review of state-of-the-art research trends in CQA, based primarily on papers reviewed over recent years. Our findings show a shift from single-turn to multi-turn QA, which empowers the field of conversational AI from different perspectives. This survey is intended to provide a concise overview for the research community, with the hope of laying a strong foundation for the field of CQA.
    Munazza Zaib, Wei Emma Zhang, Quan Z. Sheng, Adnan Mahmood, Yang Zhan
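
    As an illustration of the multi-turn setting the survey describes, below is a minimal sketch of a CQA loop that folds previous turns into each new question before querying an extractive QA model. The model name, the example context, and the naive history-concatenation strategy are assumptions for illustration only, not a method from the survey.

        # Minimal multi-turn QA sketch: prepend dialogue history so the model can
        # resolve references like "he" or "there" in follow-up questions.
        # Assumes the Hugging Face `transformers` QA pipeline; model choice and
        # history handling are illustrative, not from the survey.
        from transformers import pipeline

        qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

        context = (
            "Quan Z. Sheng is a professor at Macquarie University in Sydney. "
            "He leads research on the Web of Things and service computing."
        )

        history = []  # list of (question, answer) turns
        for question in ["Where does Quan Z. Sheng work?", "What does he lead research on?"]:
            # Naive history handling: fold earlier turns into the question text.
            rewritten = " ".join(f"{q} {a}." for q, a in history) + " " + question
            answer = qa(question=rewritten.strip(), context=context)["answer"]
            history.append((question, answer))
            print(question, "->", answer)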

    CupMar: A deep learning model for personalized news recommendation based on contextual user-profile and multi-aspect article representation

    Get PDF
    In the modern era, making recommendations for news articles poses a great challenge due to the vast amount of online information. Providing personalized recommendations from news articles, which are sources of condensed textual information, is not a trivial task. A recommendation system needs to understand both the textual information of a news article and the user's context in terms of long-term and temporary preferences derived from the user's historical records. Unfortunately, many existing methods lack the capability to meet this need. In this work, we propose a deep neural news recommendation model called CupMar that not only learns user-profile representations in different contexts but also leverages the multi-aspect properties of a news article to provide accurate, personalized news recommendations. The main components of our CupMar approach are the News Encoder and the User-Profile Encoder. Specifically, the News Encoder uses multiple properties such as news category, knowledge entities, title and body content with advanced neural network layers to derive an informative news representation, while the User-Profile Encoder looks through a user's browsed news, infers both long-term and recent preference contexts to encode a user representation, and finds the most relevant candidate news for the user. We evaluate CupMar with extensive experiments on the popular Microsoft News Dataset (MIND) and demonstrate the strong performance of our approach.
    Dai Hoang Tran, Quan Z. Sheng, Wei Emma Zhang, Nguyen H. Tran, Nguyen Lu Dang Kho
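
    As a rough structural sketch of the two-encoder design described above (not the authors' implementation; the layer sizes, the use of simple averaging, and dot-product scoring are assumptions), a news encoder and a user-profile encoder can be wired together as follows:

        import torch
        import torch.nn as nn

        class NewsEncoder(nn.Module):
            # Encodes one news article from its title tokens and a category id.
            def __init__(self, vocab=30000, n_categories=20, dim=128):
                super().__init__()
                self.word_emb = nn.Embedding(vocab, dim)
                self.cat_emb = nn.Embedding(n_categories, dim)
                self.proj = nn.Linear(2 * dim, dim)

            def forward(self, title_ids, category_id):
                title_vec = self.word_emb(title_ids).mean(dim=1)   # average title words
                cat_vec = self.cat_emb(category_id)
                return torch.tanh(self.proj(torch.cat([title_vec, cat_vec], dim=-1)))

        class UserProfileEncoder(nn.Module):
            # Aggregates the encodings of a user's browsed news into one profile vector.
            def forward(self, browsed_news_vecs):                  # (batch, history, dim)
                return browsed_news_vecs.mean(dim=1)

        def score(user_vec, candidate_vec):
            # Relevance of a candidate article as a dot product with the user profile.
            return (user_vec * candidate_vec).sum(dim=-1)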

    A novel context ontology to facilitate interoperation of semantic services in environments with wearable devices

    Full text link
    The LifeWear-Mobilized Lifestyle with Wearables (LifeWear) project attempts to create Ambient Intelligence (AmI) ecosystems by composing personalized services based on user information, environmental conditions and reasoning outputs. Two of the most important benefits over traditional environments are 1) taking advantage of wearable devices to obtain user information in a non-intrusive way and 2) integrating this information with other intelligent services and environmental sensors. This paper proposes a new ontology, composed by integrating user and service information, for semantically representing this information. Using an Enterprise Service Bus, this ontology is integrated into a semantic middleware to provide context-aware, personalized and semantically annotated services, with discovery, composition and orchestration tasks. We show how these services support a real scenario proposed in the LifeWear project.
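
    To make the idea of a context ontology for wearable environments more concrete, here is a minimal sketch using rdflib. The namespace, classes and properties below (User, WearableDevice, heartRate, locatedIn, monitoredBy) are hypothetical illustrations, not the ontology defined in the paper.

        from rdflib import Graph, Literal, Namespace, RDF, RDFS, URIRef
        from rdflib.namespace import XSD

        # Hypothetical namespace; the paper's actual ontology IRIs are not shown here.
        LW = Namespace("http://example.org/lifewear#")

        g = Graph()
        g.bind("lw", LW)

        # Declare illustrative classes and properties.
        for cls in (LW.User, LW.WearableDevice, LW.Service, LW.Environment):
            g.add((cls, RDF.type, RDFS.Class))
        for prop in (LW.heartRate, LW.locatedIn, LW.monitoredBy):
            g.add((prop, RDF.type, RDF.Property))

        # Instance data: a user wearing a heart-rate sensor in a given environment.
        alice, band, gym = URIRef(LW.alice), URIRef(LW.hrBand01), URIRef(LW.gym)
        g.add((alice, RDF.type, LW.User))
        g.add((band, RDF.type, LW.WearableDevice))
        g.add((gym, RDF.type, LW.Environment))
        g.add((alice, LW.monitoredBy, band))
        g.add((alice, LW.locatedIn, gym))
        g.add((band, LW.heartRate, Literal(128, datatype=XSD.integer)))

        print(g.serialize(format="turtle"))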

    Ambient and context-aware services

    No full text
    Quan Z. Sheng, Elhadi M. Shakshuk

    A short survey of pre-trained language models for conversational AI-A new age in NLP

    No full text
    Building a dialogue system that can communicate naturally with humans is a challenging yet interesting problem in agent-based computing. Rapid growth in this area is usually hindered by the long-standing problem of data scarcity, as these systems are expected to learn syntax, grammar, decision making, and reasoning from insufficient amounts of task-specific data. Recently introduced pre-trained language models have the potential to address the issue of data scarcity and bring considerable advantages by generating contextualized word embeddings. These models are considered the NLP counterpart of ImageNet and have been shown to capture different facets of language such as hierarchical relations, long-term dependencies, and sentiment. In this short survey paper, we discuss the recent progress made in the field of pre-trained language models. We also discuss how the strengths of these language models can be leveraged in designing more engaging and more eloquent conversational agents. This paper therefore intends to establish whether these pre-trained models can overcome the challenges pertinent to dialogue systems, and how their architecture could be exploited to overcome those challenges. Open challenges in the field of dialogue systems are also discussed.
    Munazza Zaib, Quan Z. Sheng, Wei Emma Zhan
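
    As a small illustration of the contextualized word embeddings mentioned above (a generic sketch with a standard BERT checkpoint, not a model from the survey), the same surface word receives different vectors depending on its sentence context:

        # Sketch: contextualized embeddings for the word "bank" in two contexts.
        # Assumes the Hugging Face `transformers` library and bert-base-uncased.
        import torch
        from transformers import AutoModel, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        model = AutoModel.from_pretrained("bert-base-uncased")

        def embedding_of(sentence, word):
            inputs = tokenizer(sentence, return_tensors="pt")
            with torch.no_grad():
                hidden = model(**inputs).last_hidden_state[0]       # (tokens, 768)
            tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
            return hidden[tokens.index(word)]                       # vector for that token

        v1 = embedding_of("she sat by the river bank", "bank")
        v2 = embedding_of("he deposited cash at the bank", "bank")
        print(torch.cosine_similarity(v1, v2, dim=0))  # < 1.0: context changes the embedding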

    Analyzing the sensitivity of deep neural networks for sentiment analysis: A scoring approach

    No full text
    Part of IEEE WCCI 2020, the world's largest technical event on computational intelligence, featuring the three flagship conferences of the IEEE Computational Intelligence Society (CIS) under one roof: the 2020 International Joint Conference on Neural Networks (IJCNN 2020), the 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2020), and the 2020 IEEE Congress on Evolutionary Computation (IEEE CEC 2020).
    Deep Neural Networks (DNNs) have gained significant popularity in various natural language processing tasks. However, the lack of interpretability of DNNs makes it challenging to evaluate their robustness. In this paper, we focus on DNNs for sentiment analysis and conduct an empirical investigation of their sensitivity. Specifically, we apply a scoring function to rank word importance without depending on the parameters or structure of the deep neural model. We then examine the characteristics of these words to identify the model's weaknesses and perturb words to craft targeted attacks that exploit them. We conduct extensive experiments on different neural network models across several real-world datasets and report four intriguing findings: i) modern deep learning models for sentiment analysis ignore important sentiment terms such as opinion adjectives (e.g., amazing or terrible); ii) adjectives contribute to fooling sentiment analysis models more than other part-of-speech (POS) categories; iii) changing or removing up to 10 adjectives in a review text decreases accuracy by only up to 2%; and iv) modern models are unable to recognize the difference between an objective and a subjective review text.
    Ahoud Alhazmi, Wei Emma Zhang, Quan Z. Sheng, and Abdulwahab Aljubair
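
    The abstract does not spell out the paper's scoring function; as a hedged sketch of the general idea (a model-agnostic, leave-one-out importance score), one common formulation ranks each word by how much the predicted sentiment confidence drops when that word is removed:

        # Sketch of a model-agnostic word-importance score for a sentiment classifier.
        # `predict_proba(text) -> float` is an assumed black-box returning the score
        # for the positive class; the paper's actual scoring function may differ.

        def word_importance(text: str, predict_proba) -> list[tuple[str, float]]:
            words = text.split()
            base = predict_proba(text)
            scores = []
            for i, w in enumerate(words):
                ablated = " ".join(words[:i] + words[i + 1:])      # leave one word out
                scores.append((w, base - predict_proba(ablated)))  # drop in confidence
            return sorted(scores, key=lambda p: p[1], reverse=True)

        # Example with a toy lexicon-based stand-in for a real model:
        if __name__ == "__main__":
            lexicon = {"amazing": 0.4, "terrible": -0.4, "good": 0.2}
            toy_model = lambda t: 0.5 + sum(lexicon.get(w, 0.0) for w in t.split())
            print(word_importance("the plot was amazing but the acting was terrible", toy_model))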

    Keeping the Questions Conversational: Using Structured Representations to Resolve Dependency in Conversational Question Answering

    No full text
    Having an intelligent dialogue agent that can engage in conversational question answering (ConvQA) is no longer limited to sci-fi movies and has, in fact, become a reality. These intelligent agents are required to understand and correctly interpret the sequential turns provided as the context of a given question. However, these sequential questions are sometimes left implicit and thus require the resolution of natural language phenomena such as anaphora and ellipsis. The task of question rewriting has the potential to address the challenge of resolving dependencies amongst the contextual turns by transforming them into intent-explicit questions. Nonetheless, rewriting implicit questions comes with potential drawbacks, such as producing verbose questions and removing the conversational aspect of the scenario by generating self-contained questions. In this paper, we propose a novel framework, CONVSR (CONVQA using Structured Representations), for capturing and generating intermediate representations as conversational cues to enhance the capability of the QA model to better interpret incomplete questions. We also discuss how the strengths of this task could be leveraged to design more engaging and more eloquent conversational agents. We test our model on the QuAC and CANARD datasets and show through experimental results that our proposed framework achieves a better F1 score than the standard question rewriting model.
    Munazza Zaib, Quan Z. Sheng, Wei Emma Zhang, and Adnan Mahmoo
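
    To illustrate the kind of dependency being resolved (the abstract does not specify CONVSR's structured representation, so the simple tracked state and resolution rule below are hypothetical, a naive stand-in rather than the proposed framework):

        # Illustrative only: a toy dialogue state that resolves a pronoun in a
        # follow-up question against a tracked topic entity.
        from dataclasses import dataclass

        @dataclass
        class DialogueState:
            topic: str = ""  # entity the conversation is currently about

        def update_topic(question: str, state: DialogueState) -> None:
            # Naive topic tracking: remember the capitalized words after the wh-word.
            words = [w.strip("?.,") for w in question.split()]
            caps = [w for w in words[1:] if w[:1].isupper()]
            if caps:
                state.topic = " ".join(caps)

        def resolve(question: str, state: DialogueState) -> str:
            # Resolve "it" against the tracked topic instead of rewriting the whole
            # question into a verbose self-contained form.
            return question.replace(" it ", f" {state.topic} ") if state.topic else question

        state = DialogueState()
        update_topic("When was Macquarie University founded?", state)
        print(resolve("Where is it located?", state))  # -> "Where is Macquarie University located?"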

    Feature analysis for duplicate detection in programming QA communities

    No full text
    In community question answering (CQA), duplicate questions are questions that were previously created and answered but occur again. These questions produce noise on CQA websites and impede users from finding answers efficiently. Programming CQA (PCQA), a branch of CQA that holds questions related to programming, also suffers from this problem. Existing work on duplicate detection for PCQA websites frames the task as supervised learning on question pairs and relies on a number of features extracted from those pairs. However, such work extracts only textual features and does not consider the source code in the questions, which is linguistically very different from natural language. Our work focuses on developing novel features for PCQA duplicate detection. We leverage continuous word vectors from the deep learning literature, probabilistic models from information retrieval, and association pairs mined from duplicate questions using machine translation. We provide extensive empirical analysis of the performance of these features and their various combinations using a range of learning models. Our work could be helpful for both research and practical applications that require extracting features from texts that are not entirely natural language.
    Wei Emma Zhang, Quan Z. Sheng, Yanjun Shu, and Vanh Khuyen Nguye
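
    As a hedged sketch of this kind of pair-wise feature setup (the specific features, vector model, and classifier below are generic stand-ins, not the paper's configuration), duplicate detection can be framed as classifying a feature vector computed from a question pair:

        # Sketch: supervised duplicate detection over question pairs using simple
        # pair features. The averaged-word-vector and overlap features are generic
        # stand-ins for the richer features studied in the paper.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def avg_vector(text, word_vectors, dim=100):
            # Average pre-trained word vectors (e.g., word2vec/GloVe) over the tokens.
            vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
            return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

        def pair_features(q1, q2, word_vectors):
            v1, v2 = avg_vector(q1, word_vectors), avg_vector(q2, word_vectors)
            cosine = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9))
            overlap = len(set(q1.lower().split()) & set(q2.lower().split()))
            return [cosine, overlap, abs(len(q1.split()) - len(q2.split()))]

        def train(pairs, labels, word_vectors):
            # pairs: list of (question_a, question_b); labels: 1 = duplicate, 0 = not.
            X = np.array([pair_features(a, b, word_vectors) for a, b in pairs])
            return LogisticRegression().fit(X, np.array(labels))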
    • …